Training Feedforward Neural Networks with Standard Logistic Activations is Feasible

Authors

  • Emanuele Sansone
  • Francesco G. B. De Natale
Abstract

Training feedforward neural networks with standard logistic activations is considered difficult because of the intrinsic properties of these sigmoidal functions. This work shows that such networks can be trained to achieve generalization performance comparable to that of networks based on hyperbolic tangent activations. The solution consists in applying a set of conditions on parameter initialization, derived from an information-theoretic study of the properties of a single neuron. The proposed initialization is validated through an extensive experimental analysis.
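The abstract does not state the paper's actual initialization conditions, so the sketch below only illustrates the general idea: compensating for the logistic function's non-zero output mean (0.5, versus 0 for tanh) at initialization time. The weight scale follows the well-known Glorot/Xavier heuristic with the factor 4 that relates the logistic and tanh slopes; the bias rule that centers pre-activations is a hypothetical example, not the derivation from the paper.

```python
import numpy as np

def logistic(x):
    """Standard logistic (sigmoid) activation."""
    return 1.0 / (1.0 + np.exp(-x))

def init_logistic_layer(fan_in, fan_out, rng):
    """Hypothetical initialization for a layer with logistic activations.

    Weights use the Glorot uniform range scaled by 4 (the conventional
    adjustment for logistic vs. tanh units). Biases are chosen so that,
    for inputs with mean 0.5 (the mean output of a previous logistic
    layer at initialization), the expected pre-activation is zero.
    """
    limit = 4.0 * np.sqrt(6.0 / (fan_in + fan_out))
    W = rng.uniform(-limit, limit, size=(fan_in, fan_out))
    # E[x] = 0.5 for logistic inputs, so E[x @ W] = 0.5 * column sums of W;
    # subtracting that from the bias centers the pre-activations.
    b = -0.5 * W.sum(axis=0)
    return W, b

rng = np.random.default_rng(0)
W, b = init_logistic_layer(64, 32, rng)

# With a mean-valued input vector, pre-activations are exactly zero,
# so the layer outputs sit at the logistic midpoint 0.5.
h = logistic(np.full(64, 0.5) @ W + b)
print(np.allclose(h, 0.5))  # True
```

The point of the bias correction is that, unlike tanh, logistic units push their mean activation toward 0.5, which shifts pre-activation distributions in deeper layers; centering them at initialization keeps the units out of their saturated regions.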


Similar Articles

Using an Artificial Neural Network to Detect Activations during Ventricular Fibrillation

Ventricular fibrillation is a cardiac arrhythmia that can result in sudden death. Understanding and treatment of this disorder would be improved if patterns of electrical activation could be accurately identified and studied during fibrillation. A feedforward artificial neural network using backpropagation was trained with the Rule-Based Method and the Current Source Density Method to identify ...


HIOPGA: A New Hybrid Metaheuristic Algorithm to Train Feedforward Neural Networks for Prediction

Most neural network training algorithms rely on gradient-based search, and because of its disadvantages researchers have long been interested in alternative methods. In this paper, a new Hybrid Improved Opposition-based Particle Swarm Optimization and Genetic Algorithm (HIOPGA) is proposed to train feedforward neural networks for prediction problems. The opposition-based PSO is utilized t...


Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions

It is well known that standard single-hidden layer feedforward networks (SLFNs) with at most N hidden neurons (including biases) can learn N distinct samples (x(i),t(i)) with zero error, and the weights connecting the input neurons and the hidden neurons can be chosen "almost" arbitrarily. However, these results have been obtained for the case when the activation function for the hidden neurons...


Multi-View Face Detection in Open Environments using Gabor Features and Neural Networks

Multi-view face detection in open environments is a challenging task, due to the wide variations in illumination, face appearances and occlusion. In this paper, a robust method for multi-view face detection in open environments, using a combination of Gabor features and neural networks, is presented. Firstly, the effect of changing the Gabor filter parameters (orientation, frequency, standard d...


Bidirectional Backpropagation: Towards Biologically Plausible Error Signal Transmission in Neural Networks

The back-propagation (BP) algorithm has been considered the de-facto method for training deep neural networks. It back-propagates errors from the output layer to the hidden layers in an exact manner using the transpose of the feedforward weights. However, it has been argued that this is not biologically plausible because back-propagating error signals with the exact incoming weights is not cons...



Journal:
  • CoRR

Volume abs/1710.01013  Issue 

Pages  -

Publication date 2017